Can AI sandbag safety checks to sabotage users? Yes, but not very well — for now

AI companies claim to have robust safety checks in place that ensure that models don’t say or do weird, illegal, or unsafe stuff. But what if the models were capable of evading those checks and, for some reason, trying to sabotage or mislead users? Turns out they can do this, according to Anthropic researchers. Just […] © 2024 TechCrunch. All rights reserved. For personal use only.
